    A Note on the Backfitting Estimation of Additive Models

    The additive model is one of the most popular semiparametric models. The backfitting estimation (Buja, Hastie and Tibshirani, 1989, \textit{Ann. Statist.}) for the model is intuitively easy to understand and theoretically most efficient (Opsomer and Ruppert, 1997, \textit{Ann. Statist.}); its implementation amounts to solving simple linear equations. However, convergence of the algorithm is difficult to investigate and remains an open problem. For bivariate additive models, Opsomer and Ruppert (1997, \textit{Ann. Statist.}) proved convergence under a very strong condition and conjectured that a much weaker condition is sufficient. In this short note, we show that a weak condition guarantees convergence of the backfitting algorithm when Nadaraya-Watson kernel smoothing is used.
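
    As a rough illustration of how backfitting works, the sketch below estimates a bivariate additive model by cycling Nadaraya-Watson smooths over partial residuals until the components stabilise. The Gaussian kernel, the bandwidths, and the stopping rule are illustrative assumptions, not choices made in the note.

```python
# A minimal backfitting sketch for Y = alpha + f1(X1) + f2(X2) + eps,
# using Nadaraya-Watson smoothing. Bandwidths h1, h2 and the tolerance
# are illustrative, not values prescribed by the paper.
import numpy as np

def nw_smooth(x_grid, x, r, h):
    """Nadaraya-Watson estimate of E[r | X = x_grid] with a Gaussian kernel."""
    w = np.exp(-0.5 * ((x_grid[:, None] - x[None, :]) / h) ** 2)
    return (w @ r) / w.sum(axis=1)

def backfit(x1, x2, y, h1=0.2, h2=0.2, tol=1e-8, max_iter=500):
    alpha = y.mean()
    f1 = np.zeros_like(y)
    f2 = np.zeros_like(y)
    for _ in range(max_iter):
        # Smooth each partial residual against its own covariate.
        f1_new = nw_smooth(x1, x1, y - alpha - f2, h1)
        f1_new -= f1_new.mean()          # identifiability: centre each component
        f2_new = nw_smooth(x2, x2, y - alpha - f1_new, h2)
        f2_new -= f2_new.mean()
        done = max(np.max(np.abs(f1_new - f1)), np.max(np.abs(f2_new - f2))) < tol
        f1, f2 = f1_new, f2_new
        if done:
            break
    return alpha, f1, f2

rng = np.random.default_rng(0)
x1, x2 = rng.uniform(size=200), rng.uniform(size=200)
y = np.sin(2 * np.pi * x1) + x2 ** 2 + rng.normal(scale=0.1, size=200)
alpha, f1, f2 = backfit(x1, x2, y)
```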

    An Adaptive Composite Quantile Approach to Dimension Reduction

    Sufficient dimension reduction (Li, 1991) has long been a prominent issue in multivariate nonparametric regression analysis. To uncover the central dimension reduction space, we propose in this paper an adaptive composite quantile approach. Compared to existing methods, (1) it requires minimal assumptions and is capable of revealing all dimension reduction directions; (2) it is robust against outliers; and (3) it is structure-adaptive and thus more efficient. Asymptotic results are proved and numerical examples are provided, including a real data analysis.
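
    For concreteness, the sketch below shows the basic composite quantile idea the approach builds on: several check-function losses at different quantile levels share one common slope vector, each with its own intercept. The equally spaced quantile grid, the plain (non-adaptive) weighting, and the optimizer are assumptions for illustration only, not the paper's adaptive scheme.

```python
# A minimal composite quantile regression sketch: one slope vector beta is
# shared across quantile levels tau_k, each level keeping its own intercept.
import numpy as np
from scipy.optimize import minimize

def check(u, tau):
    """Quantile check function rho_tau(u) = u * (tau - 1{u < 0})."""
    return u * (tau - (u < 0))

def cqr_loss(params, x, y, taus):
    # First len(taus) entries are the per-level intercepts, the rest is beta.
    b, beta = params[:len(taus)], params[len(taus):]
    xb = x @ beta
    return sum(check(y - b[k] - xb, tau).mean() for k, tau in enumerate(taus))

rng = np.random.default_rng(1)
x = rng.normal(size=(300, 2))
y = x @ np.array([1.0, -0.5]) + rng.standard_t(df=3, size=300)  # heavy tails
taus = np.linspace(0.1, 0.9, 9)
init = np.zeros(len(taus) + x.shape[1])
# Nelder-Mead copes with the non-smooth loss; slow but fine for a sketch.
fit = minimize(cqr_loss, init, args=(x, y, taus), method="Nelder-Mead",
               options={"maxiter": 20000, "xatol": 1e-6, "fatol": 1e-6})
beta_hat = fit.x[len(taus):]
```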

    Optimal Smoothing for a Computationally and Statistically Efficient Single Index Estimator

    In semiparametric models, it is common to under-smooth the nonparametric functions so that estimators of the finite-dimensional parameters can achieve root-n consistency. As we show, the requirement of under-smoothing may result from inefficient estimation methods or from technical difficulties. Based on a local linear kernel smoother, we propose a method for estimating the single-index model without under-smoothing. Under some conditions, our estimator of the single index is asymptotically normal and most efficient in the semiparametric sense. Moreover, we derive higher-order expansions for our estimator and use them to define an optimal bandwidth for the purposes of index estimation. As a result, we obtain a practically more relevant method and show its superior performance in a variety of applications.
    Keywords: ADE, asymptotics, bandwidth, MAVE method, semiparametric efficiency.
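
    The building block here is the local linear kernel smoother, sketched below: at each evaluation point a weighted straight line is fitted and its intercept is the fitted value. The Gaussian kernel and the fixed bandwidth are illustrative; the paper's point is precisely that the bandwidth can be chosen optimally for index estimation rather than under-smoothed.

```python
# A minimal local linear kernel smoother sketch.
import numpy as np

def local_linear(x0, x, y, h):
    """Local linear estimate of E[Y | X = x0]."""
    u = (x - x0) / h
    w = np.exp(-0.5 * u ** 2)                    # Gaussian kernel weights
    X = np.column_stack([np.ones_like(x), x - x0])
    WX = X * w[:, None]
    coef = np.linalg.solve(X.T @ WX, WX.T @ y)   # weighted least squares
    return coef[0]                               # intercept = fitted value at x0

rng = np.random.default_rng(2)
x = rng.uniform(-1, 1, 400)
y = np.sin(3 * x) + rng.normal(scale=0.2, size=400)
m_hat = np.array([local_linear(x0, x, y, h=0.15)
                  for x0 in np.linspace(-1, 1, 50)])
```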

    Global Bahadur representation for nonparametric censored regression quantiles and its applications

    This paper is concerned with the nonparametric estimation of regression quantiles where the response variable is randomly censored. Using results on the strong uniform convergence of U-processes, we derive a global Bahadur representation for the weighted local polynomial estimators, which is sufficiently accurate for many further theoretical analyses including inference. We consider two applications in detail: estimation of the average derivative, and estimation of the component functions in additive quantile regression models.
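
    As a reference point, the sketch below implements a local linear quantile fit in the uncensored case; the weights that the paper derives to handle random censoring are omitted, so this is only the underlying building block, not the paper's estimator.

```python
# A minimal local linear quantile regression sketch (uncensored case).
import numpy as np
from scipy.optimize import minimize

def check(u, tau):
    """Quantile check function rho_tau(u) = u * (tau - 1{u < 0})."""
    return u * (tau - (u < 0))

def local_linear_quantile(x0, x, y, tau, h):
    k = np.exp(-0.5 * ((x - x0) / h) ** 2)       # Gaussian kernel weights
    def loss(ab):
        a, b = ab
        return np.sum(k * check(y - a - b * (x - x0), tau))
    # Nelder-Mead handles the non-smooth check loss; the local intercept
    # estimates the conditional tau-quantile at x0.
    return minimize(loss, x0=[np.quantile(y, tau), 0.0],
                    method="Nelder-Mead").x[0]

rng = np.random.default_rng(3)
x = rng.uniform(0, 1, 300)
y = x ** 2 + rng.normal(scale=0.1 + 0.2 * x, size=300)
q_hat = [local_linear_quantile(t, x, y, tau=0.5, h=0.1)
         for t in np.linspace(0, 1, 25)]
```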

    Uniform Bahadur Representation for Local Polynomial Estimates of M-Regression and Its Application to the Additive Model

    We use local polynomial fitting to estimate the nonparametric M-regression function for strongly mixing stationary processes $\{(Y_{i},\underline{X}_{i})\}$. We establish a strong uniform consistency rate for the Bahadur representation of estimators of the regression function and its derivatives. These results are fundamental for statistical inference and for applications that involve plugging such estimators into other functionals where some control over higher-order terms is required. We apply our results to the estimation of an additive M-regression model.
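
    To fix ideas, the following sketch performs local linear M-regression with the Huber loss on i.i.d. data. The loss, kernel, and bandwidth are illustrative stand-ins for the general M-regression framework, and the paper's strongly mixing setting and uniformity results are not captured by the example.

```python
# A minimal local linear M-regression sketch with the Huber loss.
import numpy as np
from scipy.optimize import minimize

def huber(u, c=1.345):
    """Huber loss: quadratic near zero, linear in the tails."""
    a = np.abs(u)
    return np.where(a <= c, 0.5 * u ** 2, c * a - 0.5 * c ** 2)

def local_m_fit(x0, x, y, h):
    k = np.exp(-0.5 * ((x - x0) / h) ** 2)       # Gaussian kernel weights
    def loss(ab):
        return np.sum(k * huber(y - ab[0] - ab[1] * (x - x0)))
    # Local intercept and slope estimate m(x0) and m'(x0).
    return minimize(loss, x0=[np.median(y), 0.0], method="Nelder-Mead").x

rng = np.random.default_rng(4)
x = rng.uniform(-1, 1, 300)
y = np.cos(2 * x) + rng.standard_t(df=2, size=300) * 0.2  # heavy-tailed noise
fits = [local_m_fit(t, x, y, h=0.2) for t in np.linspace(-1, 1, 25)]
m_hat, m_deriv = np.array(fits).T   # fitted function and its first derivative
```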

    Consistency of Oblique Decision Tree and its Boosting and Random Forest

    Classification and Regression Trees (CART), Random Forests (RF), and Gradient Boosting Trees (GBT) are among the most popular statistical learning methods. However, their statistical consistency can be proved only under very restrictive assumptions on the underlying regression function. As an extension of standard CART, the oblique decision tree (ODT), which uses linear combinations of predictors as partitioning variables, has received much attention. ODT tends to perform better numerically than CART and requires fewer partitions. In this paper, we show that ODT is consistent for very general regression functions as long as they are continuous. We then prove the consistency of the ODT-based random forest (ODRF), whether fully grown or not. Finally, we propose an ensemble of GBTs for regression by borrowing the technique of orthogonal matching pursuit and study its consistency under very mild conditions on the tree structure. After refining existing computer packages according to the established theory, extensive experiments on real data sets show that both our ensemble of boosting trees and ODRF achieve noticeable overall improvements over RF and other forests.
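
    The defining ingredient of ODT is the oblique split: each internal node thresholds a linear combination of the predictors rather than a single coordinate. The sketch below searches random directions for such a split; the random-direction search and the quantile-based candidate thresholds are illustrative assumptions, not the fitting rule of the packages mentioned above.

```python
# A minimal oblique-split sketch: threshold w'x instead of a single x_j.
import numpy as np

def best_oblique_split(x, y, n_directions=50, rng=np.random.default_rng(5)):
    """Search random unit directions w; pick (w, threshold) minimising SSE."""
    best = (np.inf, None, None)
    for _ in range(n_directions):
        w = rng.normal(size=x.shape[1])
        w /= np.linalg.norm(w)
        z = x @ w                                  # projected predictor
        for t in np.quantile(z, np.linspace(0.1, 0.9, 9)):
            left, right = y[z <= t], y[z > t]
            if len(left) == 0 or len(right) == 0:
                continue
            sse = (((left - left.mean()) ** 2).sum()
                   + ((right - right.mean()) ** 2).sum())
            if sse < best[0]:
                best = (sse, w, t)
    return best   # (sse, direction, threshold)

rng = np.random.default_rng(5)
x = rng.normal(size=(400, 3))
y = (np.where(x @ np.array([1.0, 1.0, 0.0]) > 0, 1.0, -1.0)
     + rng.normal(scale=0.1, size=400))
sse, w, t = best_oblique_split(x, y)
```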

    Quantile Estimation of a General Single-Index Model

    The single-index model is one of the most popular semiparametric models in econometrics. In this paper, we define a quantile regression single-index model, which includes the single-index structure for both the conditional mean and the conditional variance.
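
    A plausible formalisation of such a model, with notation assumed rather than taken from the paper, is that the conditional quantile of Y given X depends on X only through a single index:

```latex
% A plausible formalisation (notation assumed, not taken from the paper):
% the tau-th conditional quantile of Y given X depends on X only through
% the single index \theta^\top X.
\[
  Q_\tau(Y \mid X = x) \;=\; g_\tau(\theta^\top x), \qquad \tau \in (0,1).
\]
% For instance, the location-scale model
%   Y = m(\theta^\top X) + \sigma(\theta^\top X)\,\varepsilon
% gives Q_\tau(Y \mid X) = m(\theta^\top X) + \sigma(\theta^\top X)\, q_\varepsilon(\tau),
% so both the conditional mean and the conditional variance carry the
% single-index structure.
```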